AI reasoning
AAAI presidential panel – AI reasoning
In March 2025, the Association for the Advancement of Artificial Intelligence (AAAI) published a report on the Future of AI Research. The report, led by outgoing AAAI President Francesca Rossi, covers 17 different AI topics and aims to clearly identify the trajectory of AI research in a structured way. As part of this project, members of the report team, along with other selected AI practitioners, are taking part in a series of video panel discussions covering selected chapters from the report. In the third panel, the AI experts tackle the topic of AI reasoning. They consider what reasoning is and what it should be in our AI models, planning techniques, model training, making smart (and not too smart) choices about which AI products to use, guarantees, why we shouldn't imitate human reasoning in AI models, thinking about the future, and more.
Reversing the Lens: Using Explainable AI to Understand Human Expertise
Rahman, Roussel, Mishra, Aashwin Ananda, Hu, Wan-Lin
Both humans and machine learning models learn from experience, particularly in safety- and reliability-critical domains. While psychology seeks to understand human cognition, the field of Explainable AI (XAI) develops methods to interpret machine learning models. This study bridges these domains by applying computational tools from XAI to analyze human learning. We modeled human behavior during a complex real-world task -- tuning a particle accelerator -- by constructing graphs of operator subtasks. Applying techniques such as community detection and hierarchical clustering to archival operator data, we reveal how operators decompose the problem into simpler components and how these problem-solving structures evolve with expertise. Our findings illuminate how humans develop efficient strategies in the absence of globally optimal solutions, and demonstrate the utility of XAI-based methods for quantitatively studying human cognition.
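The community-detection step described above can be sketched in a few lines. This is an illustrative toy, not the authors' code: the subtask names and edges below are hypothetical, standing in for a real graph of operator subtasks where edges link subtasks that co-occur in a tuning session.

```python
# Hedged sketch: community detection on a hypothetical graph of operator
# subtasks, in the spirit of the study's approach (names are illustrative).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy subtask co-occurrence graph: two tightly linked clusters of subtasks
# joined by a single weak bridge edge.
G = nx.Graph()
G.add_edges_from([
    ("adjust_quadrupole", "check_beam_profile"),
    ("check_beam_profile", "adjust_steering"),
    ("adjust_steering", "adjust_quadrupole"),
    ("log_readings", "review_history"),
    ("review_history", "set_alarm_limits"),
    ("log_readings", "set_alarm_limits"),
    ("check_beam_profile", "log_readings"),  # bridge between the clusters
])

# Greedy modularity maximization groups densely connected subtasks,
# revealing how the overall task decomposes into simpler components.
communities = greedy_modularity_communities(G)
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")
```

On a graph like this, the two triangles surface as separate communities; applied to archival operator data, the same idea exposes which subtasks an expert treats as a single problem-solving unit.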
Children's Mental Models of AI Reasoning: Implications for AI Literacy Education
Dangol, Aayushi, Wolfe, Robert, Zhao, Runhua, Kim, JaeWon, Ramanan, Trushaa, Davis, Katie, Kientz, Julie A.
As artificial intelligence (AI) advances in reasoning capabilities, most recently with the emergence of Large Reasoning Models (LRMs), understanding how children conceptualize AI's reasoning processes becomes critical for fostering AI literacy. While one of the "Five Big Ideas" in AI education highlights reasoning algorithms as central to AI decision-making, less is known about children's mental models in this area. Through a two-phase approach, consisting of a co-design session with 8 children followed by a field study with 106 children (grades 3-8), we identified three models of AI reasoning: Deductive, Inductive, and Inherent. Our findings reveal that younger children (grades 3-5) often attribute AI's reasoning to inherent intelligence, while older children (grades 6-8) recognize AI as a pattern recognizer. We highlight three tensions that surfaced in children's understanding of AI reasoning and conclude with implications for scaffolding AI curricula and designing explainable AI tools.
Rethinking harmless refusals when fine-tuning foundation models
Pop, Florin, Rosenblatt, Judd, de Lucena, Diogo Schwerz, Vaiana, Michael
In this paper, we investigate the degree to which fine-tuning in Large Language Models (LLMs) effectively mitigates versus merely conceals undesirable behavior. Through the lens of semi-realistic role-playing exercises designed to elicit such behaviors, we explore the response dynamics of LLMs post fine-tuning interventions. Our methodology involves prompting models for Chain-of-Thought (CoT) reasoning and analyzing the coherence between the reasoning traces and the resultant outputs. Notably, we identify a pervasive phenomenon we term \emph{reason-based deception}, where models either stop producing reasoning traces or produce seemingly ethical reasoning traces that belie the unethical nature of their final outputs. We further examine the efficacy of response strategies (polite refusal versus explicit rebuttal) in curbing the occurrence of undesired behavior in subsequent outputs of multi-turn interactions. Our findings reveal that explicit rebuttals significantly outperform polite refusals in preventing the continuation of undesired outputs and nearly eliminate reason-based deception, challenging current practices in model fine-tuning. Accordingly, the two key contributions of this paper are (1) defining and studying reason-based deception, a new type of hidden behavior, and (2) demonstrating that rebuttals provide a more robust response model to harmful requests than refusals, thereby highlighting the need to reconsider the response strategies in fine-tuning approaches.
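The coherence analysis described above can be illustrated with a minimal sketch. This is not the paper's method: the keyword heuristic and example strings below are assumptions chosen to show the shape of the check, flagging a response as suspect when the reasoning trace is missing or when the trace reads as a refusal while the final output still complies.

```python
# Hedged toy sketch: compare a model's chain-of-thought trace with its
# final output and flag mismatches. The marker list is illustrative only.
REFUSAL_MARKERS = ("i cannot", "i can't", "i won't", "not appropriate")


def _refuses(text: str) -> bool:
    """Crude keyword check for refusal language (illustrative heuristic)."""
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def flags_reason_based_deception(trace: str, output: str) -> bool:
    """Flag a response when the reasoning trace is absent, or when the
    trace refuses while the final output nevertheless complies."""
    if not trace.strip():
        return True  # model stopped producing a reasoning trace
    return _refuses(trace) and not _refuses(output)


# Trace refuses, output complies: the mismatch is flagged.
print(flags_reason_based_deception(
    "This request is not appropriate, so I cannot comply.",
    "Sure, here is how you do it: ..."))
```

A real analysis would replace the keyword heuristic with a stronger judge of whether trace and output agree, but the structure is the same: score the trace and the output separately, then flag incoherence between them.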
Unlocking the black box of AI reasoning -- GCN
While artificial intelligence has proved effective at many tasks critical to government -- such as protecting power grids against hacking -- some agencies have been reluctant to employ AI tools because their inner workings are unintelligible to humans. How can a solution be trusted if nobody knows how it works? David Bau, a Ph.D. student at the Massachusetts Institute of Technology, thinks generative adversarial networks may help show how AI algorithms reach their conclusions. Bau and others are testing GANs not only as tools for performing tasks, such as pattern recognition, but for examining how neural networks make decisions.
Facebook makes big advances in AI reasoning and machine translation - SiliconANGLE
Facebook Inc. is using its @Scale conference today to provide an update on its progress in artificial intelligence research. The social media company is open-sourcing a new "AI reasoning" platform and providing some updates on its research into machine translation. It's part of a broad push to scale up AI workloads, a difficult task given the massive amounts of data needed to train AI models, Srinivas Narayanan (pictured), the lead for Facebook's Applied AI Research, said this morning at the conference in San Jose, California. "Facebook wouldn't be where it is today without AI," Narayanan said. "It's deeply integrated into everything we do."